Learning Eigenvectors for Free

Authors

  • Wouter M. Koolen
  • Wojciech Kotlowski
  • Manfred K. Warmuth
Abstract

We extend the classical problem of predicting a sequence of outcomes from a finite alphabet to the matrix domain. In this extension, the alphabet of n outcomes is replaced by the set of all dyads, i.e. outer products uu^T where u is a unit-length vector in R^n. Whereas in the classical case the goal is to learn (i.e. sequentially predict as well as) the best multinomial distribution, in the matrix case we desire to learn the density matrix that best explains the observed sequence of dyads. We show how popular online algorithms for learning a multinomial distribution can be extended to learn density matrices. Intuitively, learning the n^2 parameters of a density matrix is much harder than learning the n parameters of a multinomial distribution. Completely surprisingly, we prove that the worst-case regrets of certain classical algorithms and their matrix generalizations are identical. The reason is that the worst-case sequences of dyads share a common eigensystem, i.e. the worst-case regret is already achieved in the classical case. So these matrix algorithms learn the eigenvectors without any additional regret.
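The sequential prediction setting in the abstract can be sketched numerically. The snippet below is an illustrative sketch, not the paper's exact algorithm: it uses a matrix analogue of the add-one Laplace estimator (the identity smoothing and the horizon of 50 trials are my choices), predicting with the normalized, smoothed sum of past dyads and charging log loss -log(u^T W u) on each new dyad uu^T.

```python
import numpy as np

def predict_density_matrix(S, t, n):
    # Matrix analogue of the Laplace (add-one) estimator:
    # smooth the running sum of observed dyads S with the identity
    # and normalize so the prediction W has unit trace.
    return (S + np.eye(n)) / (t + n)

rng = np.random.default_rng(0)
n = 3
S = np.zeros((n, n))              # running sum of observed dyads
total_loss = 0.0
for t in range(50):
    W = predict_density_matrix(S, t, n)
    u = rng.normal(size=n)
    u /= np.linalg.norm(u)        # unit vector; the outcome is the dyad u u^T
    total_loss += -np.log(u @ W @ u)   # log loss of prediction W on u u^T
    S += np.outer(u, u)
```

Note that W is always a valid density matrix (symmetric, positive definite, unit trace), just as the classical Laplace estimator always outputs a valid multinomial distribution.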

Related articles

Semi-supervised eigenvectors for large-scale locally-biased learning

In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks “nearby” that prespecified target region. For example, one might be interested in the clustering structure of a data graph near a prespecified “seed set” of nodes, or one m...

Semi-supervised Eigenvectors for Locally-biased Learning

In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks “nearby” that pre-specified target region. Locally-biased problems of this sort are particularly challenging for popular eigenvector-based machine learning and data analys...

Norm-Free Radon-Nikodym Approach to Machine Learning

For the Machine Learning (ML) classification problem, where a vector of x-observations (values of attributes) is mapped to a single y value (class label), a generalized Radon-Nikodym type of solution is proposed. Quantum-mechanics-like probability states ψ²(x) are considered, and “cluster centers”, corresponding to the extrema of ⟨yψ²(x)⟩ / ⟨ψ²(x)⟩, are found from a generalized eigenvalue pro...
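The quotient described in this excerpt lends itself to a small numerical sketch. Assuming a polynomial basis ψ_k(x) = x^k and synthetic two-class data (both are my assumptions, not taken from that paper), the stationary values of ⟨yψ²(x)⟩ / ⟨ψ²(x)⟩ over ψ in the basis span are the eigenvalues of a generalized eigenproblem built from two moment matrices:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 1-D data: class label y depends on the sign of x.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = np.where(x > 0, 1.0, 0.0)

deg = 4
Psi = np.vander(x, deg, increasing=True)   # columns psi_k(x) = x^k
A = Psi.T @ (y[:, None] * Psi)             # moment matrix <y psi_j psi_k>
B = Psi.T @ Psi                            # Gram matrix   <psi_j psi_k>
lam, V = eigh(A, B)                        # generalized eigenvalues, ascending
# lam are the stationary values of <y psi^2> / <psi^2>,
# i.e. the candidate "cluster centers" in y-space.
```

Since y here takes values in {0, 1}, every such Rayleigh-type quotient, and hence every eigenvalue, lies between 0 and 1.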

Free oscillations of Earth-like planets in the presence of a magnetic field

  We study the free oscillations of a non-rotating Earth-like planet in the presence of a force-free magnetic field. The model consists of a solid inner core, a liquid outer core and a solid mantle, and is spherically symmetric. The Lagrangian displacements are decomposed into scaloidal, poloidal and toroidal components using a gauged version of the Helmholtz theorem. These components are identifi...

Mesh Segmentation Using Laplacian Eigenvectors and Gaussian Mixtures

In this paper a new completely unsupervised mesh segmentation algorithm is proposed, which is based on the PCA interpretation of the Laplacian eigenvectors of the mesh and on parametric clustering using Gaussian mixtures. We analyse the geometric properties of these vectors and we devise a practical method that combines single-vector analysis with multiple-vector analysis. We attempt to charact...
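The pipeline this excerpt describes, embedding by Laplacian eigenvectors and then clustering with a Gaussian mixture, can be sketched on a toy graph. The graph below (two cliques joined by one edge) is a crude stand-in for a mesh, chosen for illustration and not taken from that paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy "mesh": two 5-vertex cliques joined by a single bridge edge.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 1.0

L = np.diag(A.sum(axis=1)) - A   # combinatorial graph Laplacian
_, V = np.linalg.eigh(L)         # eigenvectors, ascending eigenvalues
embed = V[:, [1]]                # Fiedler vector as a 1-D spectral embedding

gmm = GaussianMixture(n_components=2, random_state=0).fit(embed)
labels = gmm.predict(embed)      # one mixture component per segment
```

On this graph the Fiedler vector takes opposite signs on the two cliques, so the two Gaussian components recover the two segments.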


Journal title:

Volume   Issue

Pages  -

Publication date: 2011